35 research outputs found

    Stochastic Primal-Dual Coordinate Method for Nonlinear Convex Cone Programs

    Block coordinate descent (BCD) methods and their variants have been widely used to cope with large-scale unconstrained optimization problems in fields such as image processing, machine learning, and compressed sensing. Nonlinear convex cone programs (NCCP), by contrast, involve coupling constraints; they are important in many practical applications but are hard to solve with existing block coordinate methods. This paper introduces a stochastic primal-dual coordinate (SPDC) method for solving large-scale NCCP. At each iteration, the method selects a block of variables uniformly at random; applying a linearization and a Bregman-like function (core function) to the selected block yields a simple parallel primal-dual decomposition of the NCCP. The sequence generated by the algorithm is proved to converge almost surely to an optimal solution of the primal problem. Convergence rates of two types, almost sure and in expectation, are also obtained, along with a probabilistic complexity bound.
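    The idea described above (pick a random coordinate block, take a linearized primal step on the Lagrangian, then a projected dual ascent step) can be sketched as follows. This is a minimal illustrative scheme for min f(x) s.t. Ax <= b with single-coordinate blocks and Euclidean proximal terms, not the paper's exact SPDC method; the function name `spdc_sketch` and all step sizes are assumptions for illustration.

    ```python
    import random

    def spdc_sketch(grad_f, A, b, n, iters=20000, tau=0.5, sigma=0.05, seed=0):
        """Stochastic block primal-dual sketch for min f(x) s.t. A x <= b.

        Each iteration updates one uniformly chosen coordinate of x via a
        linearized Lagrangian step, then takes a projected ascent step on
        the multiplier y >= 0. Returns the tail-averaged primal iterate and
        the final multiplier. Illustrative only, not the paper's SPDC.
        """
        rng = random.Random(seed)
        m = len(b)
        x = [0.0] * n
        y = [0.0] * m
        xbar = [0.0] * n
        count = 0
        for t in range(iters):
            i = rng.randrange(n)  # uniform random block (here: one coordinate)
            # linearized Lagrangian gradient w.r.t. x_i: grad f(x)_i + (A^T y)_i
            g = grad_f(x)[i] + sum(A[j][i] * y[j] for j in range(m))
            x[i] -= tau * g
            # dual projected ascent: y <- max(0, y + sigma * (A x - b))
            for j in range(m):
                Ax_j = sum(A[j][k] * x[k] for k in range(n))
                y[j] = max(0.0, y[j] + sigma * (Ax_j - b[j]))
            if t >= iters // 2:  # average the tail iterates
                count += 1
                for k in range(n):
                    xbar[k] += x[k]
        return [v / count for v in xbar], y
    ```

    For example, minimizing (1/2)||x||^2 - 1^T x over x in R^4 subject to sum(x) <= 1 has the solution x_i = 0.25 with multiplier y = 0.75, which the sketch recovers.
    
    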

    An Augmented Lagrangian Approach to Conically Constrained Non-monotone Variational Inequality Problems

    In this paper we consider a non-monotone (mixed) variational inequality (VI) model with (nonlinear) convex conic constraints. By developing an equivalent Lagrangian-function-like primal-dual saddle-point system for the VI model in question, we introduce an augmented Lagrangian primal-dual method, called ALAVI in this paper, for solving a general constrained VI model. Under an assumption, called the primal-dual variational coherence condition, we prove the convergence of ALAVI. Next, we show that many existing generalized monotonicity properties are sufficient, though by no means necessary, to imply this coherence condition, and hence to ensure convergence of ALAVI. Under that assumption, we further show that ALAVI has in fact an o(1/√k) global rate of convergence, where k is the iteration count. By introducing a new gap function, this rate further improves to O(1/k) if the mapping is monotone. Finally, we show that under a metric subregularity condition, the local convergence rate of ALAVI improves to linear even if the VI model is non-monotone. Numerical experiments on randomly generated, highly nonlinear and non-monotone VI problems show the practical efficacy of the newly proposed method.
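    An augmented-Lagrangian primal-dual iteration for a constrained VI can be sketched as below. This is a generic scheme for the VI "find x* with F(x*)^T (x - x*) >= 0 for all x satisfying g(x) <= 0", using a multiplier estimate drawn from the augmented Lagrangian term; it is not the paper's exact ALAVI update, and the name `alavi_sketch` and all step-size choices are assumptions for illustration.

    ```python
    def alavi_sketch(F, g, grad_g, x0, alpha=0.05, rho=0.2, iters=20000):
        """Augmented-Lagrangian primal-dual sketch for the constrained VI:
        find x* with F(x*)^T (x - x*) >= 0 for all x with g(x) <= 0.

        Each iteration forms a multiplier estimate from the augmented
        Lagrangian penalty, takes a primal step on F(x) + y * grad g(x),
        then a projected dual ascent step. Illustrative only.
        """
        x = list(x0)
        y = 0.0
        for _ in range(iters):
            # multiplier estimate from the augmented Lagrangian term
            y_trial = max(0.0, y + rho * g(x))
            Fx = F(x)
            dg = grad_g(x)
            # primal step on the Lagrangian-like map F(x) + y_trial * grad g(x)
            x = [xi - alpha * (Fi + y_trial * dgi)
                 for xi, Fi, dgi in zip(x, Fx, dg)]
            # dual projected ascent at the new point, keeping y >= 0
            y = max(0.0, y + rho * g(x))
        return x, y
    ```

    As a check on a monotone (but non-gradient) instance, take F(x) = (I + S)x - c with the skew matrix S = [[0, 1], [-1, 0]] and c = (2, 1), constrained by g(x) = x1 + x2 - 1 <= 0; the KKT system gives x* = (0.5, 0.5) and multiplier y* = 1, which the iteration approaches.
    
    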